
    Upper and Lower Bounds for Weak Backdoor Set Detection

    We obtain upper and lower bounds on the running times of exponential-time algorithms for detecting weak backdoor sets of 3CNF formulas, considering various base classes. These results include (omitting polynomial factors): (i) a 4.54^k algorithm to detect whether there is a weak backdoor set of at most k variables into the class of Horn formulas; and (ii) a 2.27^k algorithm to detect whether there is a weak backdoor set of at most k variables into the class of Krom formulas. These bounds improve on the earlier known bound of 6^k. We also prove a 2^k lower bound for these problems, subject to the Strong Exponential Time Hypothesis. Comment: A short version will appear in the proceedings of the 16th International Conference on Theory and Applications of Satisfiability Testing.
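
    To make the object of study concrete, the following is a minimal brute-force sketch of the underlying definition (not the improved 4.54^k algorithm): a set B of variables is a weak backdoor set into Horn if some assignment of B leaves a formula that is both Horn and satisfiable. Clauses are encoded as lists of signed integers (DIMACS-style); the helper names are our own, and the check runs in 2^|B| times polynomial time.

```python
from itertools import product

def is_horn(clauses):
    # Horn: every clause has at most one positive literal.
    return all(sum(1 for lit in c if lit > 0) <= 1 for c in clauses)

def horn_sat(clauses):
    # Horn-SAT by unit propagation: variables are false unless forced true
    # by a clause that has shrunk to a single positive literal.
    forced_true = set()
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            open_lits = [lit for lit in clause
                         if not (lit < 0 and -lit in forced_true)]
            if any(lit > 0 and lit in forced_true for lit in open_lits):
                continue                       # clause already satisfied
            if not open_lits:
                return False                   # clause falsified: unsatisfiable
            if len(open_lits) == 1 and open_lits[0] > 0:
                forced_true.add(open_lits[0])  # positive unit: force true
                changed = True
    return True

def apply_assignment(clauses, assignment):
    # Drop satisfied clauses, delete falsified literals.
    reduced = []
    for clause in clauses:
        kept, satisfied = [], False
        for lit in clause:
            var = abs(lit)
            if var in assignment:
                if (lit > 0) == assignment[var]:
                    satisfied = True
                    break
            else:
                kept.append(lit)
        if not satisfied:
            reduced.append(kept)
    return reduced

def is_weak_backdoor_into_horn(clauses, candidate):
    # candidate is a weak backdoor set into Horn if SOME assignment of its
    # variables leaves a formula that is Horn and satisfiable.
    candidate = list(candidate)
    for bits in product([False, True], repeat=len(candidate)):
        reduced = apply_assignment(clauses, dict(zip(candidate, bits)))
        if is_horn(reduced) and horn_sat(reduced):
            return True
    return False
```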

    Polynomial fixed-parameter algorithms: a case study for longest path on interval graphs

    We study the design of fixed-parameter algorithms for problems already known to be solvable in polynomial time. The main motivation is to obtain more efficient algorithms for problems with unattractive polynomial running times. Here, we focus on a fundamental graph problem, Longest Path: it is NP-hard in general but known to be solvable in O(n^4) time on n-vertex interval graphs. We show how to solve Longest Path on interval graphs, parameterized by the vertex deletion number k to proper interval graphs, in O(k^9 n) time. Notably, Longest Path is trivially solvable in linear time on proper interval graphs, and the parameter value k can be approximated up to a factor of 4 in linear time. From a more general perspective, we believe that parameterized complexity analysis of polynomial-time solvable problems offers a fertile ground for future studies of all sorts of algorithmic problems. It may enable a refined understanding of efficiency aspects of polynomial-time solvable problems, similar to what classical parameterized complexity analysis does for NP-hard problems.
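
    As a small companion to the remark that Longest Path is trivially linear-time on proper interval graphs, here is a hedged sketch assuming a proper interval model (pairs of endpoints with no interval properly contained in another) is given. In such a model, sorting by left endpoint also sorts right endpoints, consecutive overlapping intervals are adjacent, and each connected component therefore has a Hamiltonian path in that order; the longest path is the largest such block.

```python
def longest_path_proper_interval(intervals):
    # intervals: list of (left, right) pairs forming a proper interval model.
    # Each maximal run of consecutively overlapping intervals (in left-endpoint
    # order) is a connected component and yields a Hamiltonian path of it.
    order = sorted(range(len(intervals)), key=lambda i: intervals[i])
    best, block = [], []
    for i in order:
        if block and intervals[block[-1]][1] < intervals[i][0]:
            block = []                        # gap: a new connected component starts
        block.append(i)
        if len(block) > len(best):
            best = list(block)
    return best                               # vertex indices along a longest path
```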

    Parameterizing by the Number of Numbers

    The usefulness of parameterized algorithmics has often depended on what Niedermeier has called "the art of problem parameterization". In this paper we introduce and explore a novel but general form of parameterization: the number of numbers. Several classic numerical problems, such as Subset Sum, Partition, 3-Partition, Numerical 3-Dimensional Matching, and Numerical Matching with Target Sums, have multisets of integers as input. We initiate the study of parameterizing these problems by the number of distinct integers in the input. We rely on an FPT result for Integer Linear Programming Feasibility (ILPF) to show that all the above-mentioned problems are fixed-parameter tractable when parameterized in this way. In various applied settings, problem inputs often consist in part of multisets of integers or multisets of weighted objects (such as edges in a graph, or jobs to be scheduled). Such number-of-numbers parameterized problems often reduce to subproblems about transition systems of various kinds, parameterized by the size of the system description. We consider several core problems of this kind relevant to number-of-numbers parameterization. Our main hardness result considers the following problem: given a non-deterministic Mealy machine M (a finite state automaton outputting a letter on each transition), an input word x, and a census requirement c for the output word specifying how many times each letter of the output alphabet should be written, decide whether there exists a computation of M reading x that outputs a word y meeting the requirement c. We show that this problem is hard for W[1]. If the question is whether there exists an input word x such that a computation of M on x outputs a word that meets c, the problem becomes fixed-parameter tractable.
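
    The following is a hedged brute-force sketch of the W[1]-hard decision problem described above, only to make the statement concrete; it is not the paper's reduction or algorithm. The machine is assumed to be given as a dictionary of transitions, and letters absent from the census are assumed to be required zero times.

```python
def census_run_exists(transitions, start, word, census):
    # transitions: dict (state, input_letter) -> list of (next_state, output_letter)
    # census:      dict output_letter -> required number of occurrences
    # Returns True if some run of the nondeterministic Mealy machine on `word`
    # produces an output word meeting the census exactly.
    target = dict(census)
    letters = sorted(target)                      # fixed order for count vectors

    def dfs(state, pos, counts, failed):
        if (state, pos, counts) in failed:
            return False
        if pos == len(word):
            return all(counts[i] == target[l] for i, l in enumerate(letters))
        for nxt, out in transitions.get((state, word[pos]), []):
            if out not in target:                 # letters missing from the census
                continue                          # are required 0 times (assumption)
            i = letters.index(out)
            new = counts[:i] + (counts[i] + 1,) + counts[i + 1:]
            if new[i] > target[out]:              # census already exceeded: prune
                continue
            if dfs(nxt, pos + 1, new, failed):
                return True
        failed.add((state, pos, counts))          # memoize failed configurations
        return False

    return dfs(start, 0, tuple(0 for _ in letters), set())
```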

    Explaining Snapshots of Network Diffusions: Structural and Hardness Results

    Much research has been done on the diffusion of ideas or technologies on social networks, including the Influence Maximization problem and many of its variations. Here, we investigate a type of inverse problem. Given a snapshot of the diffusion process, we seek to understand whether the snapshot is feasible for a given dynamic, i.e., whether there is a limited number of nodes whose initial adoption can result in the snapshot in finite time. While similar questions have been considered for epidemic dynamics, here we consider this problem for variations of the deterministic Linear Threshold Model, which is more appropriate for modeling strategic agents. Specifically, we consider both sequential and simultaneous dynamics, when deactivations are allowed and when they are not. Although we show hardness results for all the variations we consider, the case of sequential dynamics with deactivations allowed turns out to be significantly harder than all others. In contrast, sequential dynamics make the problem trivial on cliques, even though its complexity for simultaneous dynamics is unknown. We complement our hardness results with structural insights that can help better understand diffusions on social networks under various dynamics. Comment: 14 pages, 3 figures.
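
    As a concrete illustration of one of the variants above (simultaneous dynamics, deactivations not allowed), here is a hedged sketch with unit influence weights and integer thresholds as simplifying assumptions. The brute-force feasibility check only makes the problem statement concrete; the hardness results above show that no efficient procedure of this kind is expected in general.

```python
from itertools import combinations

def ltm_reaches(neighbors, threshold, seeds, snapshot):
    # Simultaneous, monotone Linear Threshold dynamics (no deactivations):
    # a node activates once its number of active neighbours reaches its
    # threshold.  Returns True if the active set ever equals the snapshot.
    active, snapshot = set(seeds), set(snapshot)
    while True:
        if active == snapshot:
            return True
        newly = {v for v in neighbors
                 if v not in active
                 and sum(1 for u in neighbors[v] if u in active) >= threshold[v]}
        if not newly:
            return False
        active |= newly

def snapshot_feasible(neighbors, threshold, snapshot, budget):
    # Brute force over seed sets of size <= budget.  For monotone dynamics a
    # seed outside the snapshot stays active forever, so it suffices to try
    # seed sets drawn from the snapshot itself.
    snapshot = sorted(snapshot)
    for size in range(budget + 1):
        for seeds in combinations(snapshot, size):
            if ltm_reaches(neighbors, threshold, seeds, snapshot):
                return True
    return False
```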

    A shortcut to (sun)flowers: Kernels in logarithmic space or linear time

    We investigate whether kernelization results can be obtained if we restrict kernelization algorithms to run in logarithmic space. This restriction is motivated by the question of what results are attainable for preprocessing via simple and/or local reduction rules. We find kernelizations for d-Hitting Set(k), d-Set Packing(k), Edge Dominating Set(k), and a number of hitting and packing problems in graphs, each running in logspace. Additionally, we return to the question of linear-time kernelization. For d-Hitting Set(k) a linear-time kernelization was given by van Bevern [Algorithmica (2014)]. We give a simpler procedure and save a large constant factor in the size bound. Furthermore, we show that we can obtain a linear-time kernel for d-Set Packing(k) as well. Comment: 18 pages.
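
    For readers unfamiliar with hitting-set kernels, the following is a toy illustration of the flavour of reduction rules involved, restricted to the d = 2 special case (Vertex Cover) and based on the classic high-degree rule; it is not the paper's logspace or linear-time procedure for general d.

```python
from collections import defaultdict

def buss_kernel(edges, k):
    # High-degree kernelization for Vertex Cover (2-Hitting Set).  Returns
    # (reduced_edges, remaining_budget, forced_vertices), or None if no
    # vertex cover of size <= k exists.
    edges = {frozenset(e) for e in edges}
    forced = set()
    while True:
        deg = defaultdict(int)
        for e in edges:
            for v in e:
                deg[v] += 1
        high = next((v for v, d in deg.items() if d > k), None)
        if high is None:
            break
        forced.add(high)                              # degree > k: in every
        edges = {e for e in edges if high not in e}   # cover of size <= k
        k -= 1
        if k < 0:
            return None
    if len(edges) > k * k:            # max degree <= k, so a size-<=k cover
        return None                   # can hit at most k*k edges
    return edges, k, forced
```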

    On the (non-)existence of polynomial kernels for Pl-free edge modification problems

    Given a graph G = (V, E) and an integer k, an edge modification problem for a graph property P consists in deciding whether there exists a set F of edges of size at most k such that the graph H = (V, E Δ F) satisfies the property P, where Δ denotes the symmetric difference. In the P edge-completion problem, the set F of edges is constrained to be disjoint from E; in the P edge-deletion problem, F is a subset of E; no constraint is imposed on F in the P edge-editing problem. A number of optimization problems can be expressed in terms of graph modification problems, which have been extensively studied in the context of parameterized complexity. When parameterized by the size k of the edge set F, it has been proved that if P is a hereditary property characterized by a finite set of forbidden induced subgraphs, then the three P edge-modification problems are FPT. It was then natural to ask whether these problems also admit a polynomial-size kernel. Using recent lower bound techniques, Kratsch and Wahlstrom answered this question negatively. However, the problem remains open on many natural graph classes characterized by forbidden induced subgraphs. Kratsch and Wahlstrom asked whether the result holds when the forbidden subgraphs are paths or cycles, and pointed out that the problem is already open in the case of P4-free graphs (i.e., cographs). This paper provides positive and negative results in that line of research. We prove that parameterized cograph edge modification problems have cubic vertex kernels, whereas polynomial kernels are unlikely to exist for the Pl-free and Cl-free edge-deletion problems for large enough l.
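
    As a small concrete companion to the target class, here is a hedged sketch of recognizing cographs (P4-free graphs) via the standard characterization: every induced subgraph on at least two vertices is disconnected either in the graph or in its complement. The encoding (an adjacency dictionary) and helper names are our own.

```python
def components(vertices, adjacent):
    # Connected components of the graph on `vertices` under the given
    # adjacency predicate (used for both the graph and its complement).
    vs = set(vertices)
    comps = []
    while vs:
        start = vs.pop()
        comp, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for v in list(vs):
                if adjacent(u, v):
                    vs.remove(v)
                    comp.add(v)
                    stack.append(v)
        comps.append(comp)
    return comps

def is_cograph(vertices, adj):
    # adj: dict vertex -> set of neighbours.  A graph on >= 2 vertices that is
    # connected and co-connected contains an induced P4, hence is no cograph.
    vertices = list(vertices)
    if len(vertices) <= 1:
        return True
    comps = components(vertices, lambda u, v: v in adj[u])
    if len(comps) > 1:
        return all(is_cograph(c, adj) for c in comps)
    co_comps = components(vertices, lambda u, v: u != v and v not in adj[u])
    if len(co_comps) > 1:
        return all(is_cograph(c, adj) for c in co_comps)
    return False
```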

    Towards Work-Efficient Parallel Parameterized Algorithms

    Parallel parameterized complexity theory studies how fixed-parameter tractable (fpt) problems can be solved in parallel. Previous theoretical work focused on parallel algorithms that are very fast in principle, but did not take into account that when we only have a small number of processors (between 2 and, say, 1024), it is more important that the parallel algorithms are work-efficient. In the present paper we investigate how work-efficient fpt algorithms can be designed. We review standard methods from fpt theory, like kernelization, search trees, and interleaving, and prove trade-offs for them between work efficiency and runtime improvements. This results in a toolbox for developing work-efficient parallel fpt algorithms. Comment: Prior full version of the paper that will appear in Proceedings of the 13th International Conference and Workshops on Algorithms and Computation (WALCOM 2019), February 27 - March 02, 2019, Guwahati, India. The final authenticated version is available online at https://doi.org/10.1007/978-3-030-10564-8_2
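
    The following toy sketch is not the paper's method; it only illustrates the tension it studies, using the standard 2^k search tree for Vertex Cover and a variant that parallelizes only the topmost branches, so the total work stays close to that of the sequential search tree.

```python
from concurrent.futures import ProcessPoolExecutor

def vc_branch(edges, k):
    # Classic 2^k search tree for Vertex Cover: pick any remaining edge
    # (u, v); every cover contains u or v, so branch on both choices.
    if not edges:
        return True
    if k == 0:
        return False
    u, v = edges[0]
    return any(vc_branch([e for e in edges if w not in e], k - 1)
               for w in (u, v))

def vc_branch_parallel(edges, k, workers=2):
    # Farm out only the two topmost branches to a process pool; deeper levels
    # run sequentially (call from under `if __name__ == "__main__":`).
    edges = list(edges)
    if not edges:
        return True
    if k == 0:
        return False
    u, v = edges[0]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        jobs = [pool.submit(vc_branch, [e for e in edges if w not in e], k - 1)
                for w in (u, v)]
        return any(job.result() for job in jobs)
```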

    Applying a Cut-Based Data Reduction Rule for Weighted Cluster Editing in Polynomial Time

    Given an undirected graph, the task in Cluster Editing is to insert and delete a minimum number of edges to obtain a cluster graph, that is, a disjoint union of cliques. In the weighted variant each vertex pair comes with a weight and the edge modifications have to be of minimum overall weight. In this work, we provide the first polynomial-time algorithm to apply the following data reduction rule of Böcker et al. [Algorithmica, 2011] for Weighted Cluster Editing: For a graph G = (V, E), merge a vertex set S ⊆ V into a single vertex if the minimum cut of G[S] is at least the combined cost of inserting all missing edges within G[S] plus the cost of cutting all edges from S to the rest of the graph. Complementing our theoretical findings, we experimentally demonstrate the effectiveness of the data reduction rule, shrinking real-world test instances from the PACE Challenge 2021 by around 24%, while previous heuristic implementations of the data reduction rule only achieve 8%.
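
    The sketch below only evaluates the merge condition for one given candidate set S (using a Stoer-Wagner minimum cut via networkx); the paper's contribution is applying the rule in polynomial time overall, i.e. finding where it applies, which is not reproduced here. The encoding of the weighted instance (edge attribute "weight" for deletion costs, a separate map for insertion costs) is an assumption of this sketch.

```python
import networkx as nx
from itertools import combinations

def merge_rule_applies(G, S, insert_cost):
    # G: weighted nx.Graph whose edge attribute "weight" is the deletion cost
    # of that edge; insert_cost maps frozenset({u, v}) of a missing pair to
    # its insertion cost (this encoding is an assumption of the sketch).
    S = set(S)
    H = nx.Graph(G.subgraph(S))                   # induced subgraph G[S]
    if len(S) < 2 or not nx.is_connected(H):
        return False                              # stoer_wagner needs a connected graph
    cut_value, _ = nx.stoer_wagner(H)             # minimum cut of G[S]
    missing = sum(insert_cost[frozenset(p)]
                  for p in combinations(S, 2) if not G.has_edge(*p))
    boundary = sum(d.get("weight", 1) for u, v, d in G.edges(data=True)
                   if (u in S) != (v in S))
    return cut_value >= missing + boundary
```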

    Evaluation of ILP-based approaches for partitioning into colorful components

    The NP-hard Colorful Components problem is a graph partitioning problem on vertex-colored graphs. We identify a new application of Colorful Components in the correction of Wikipedia interlanguage links, and describe and compare three exact and two heuristic approaches. In particular, we devise two ILP formulations, one based on Hitting Set and one based on Clique Partition. Furthermore, we use the recently proposed implicit hitting set framework [Karp, JCSS 2011; Chandrasekaran et al., SODA 2011] to solve Colorful Components. Finally, we study a move-based and a merge-based heuristic for Colorful Components. We can optimally solve Colorful Components for the Wikipedia link correction data; while the Clique Partition-based ILP outperforms the other two exact approaches, the implicit hitting set approach is a simple and competitive alternative. The merge-based heuristic is very accurate and outperforms the move-based one. The above results for Wikipedia data are confirmed by experiments with synthetic instances.
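
    To make the clique-partition viewpoint concrete, here is a hedged sketch of one plausible partition-style ILP built with PuLP; it is not necessarily the exact formulation evaluated in the paper. Vertices are grouped by binary "same component" variables, components must be colorful, and the number of edges running between components (i.e. deleted edges) is minimized.

```python
import pulp
from itertools import combinations

def colorful_components_ilp(vertices, edges, color):
    # color: dict vertex -> color.  Transitivity needs O(n^3) constraints,
    # so this sketch is only practical for small instances.
    vertices = sorted(vertices)
    pairs = list(combinations(vertices, 2))
    prob = pulp.LpProblem("colorful_components", pulp.LpMinimize)
    y = {p: pulp.LpVariable(f"y_{p[0]}_{p[1]}", cat=pulp.LpBinary) for p in pairs}
    same = lambda u, v: y[(u, v)] if (u, v) in y else y[(v, u)]

    # Objective: an edge whose endpoints lie in different components is deleted.
    prob += pulp.lpSum(1 - same(u, v) for u, v in edges)

    # "Same component" must be transitive.
    for a, b, c in combinations(vertices, 3):
        prob += same(a, b) + same(b, c) - same(a, c) <= 1
        prob += same(a, b) + same(a, c) - same(b, c) <= 1
        prob += same(a, c) + same(b, c) - same(a, b) <= 1

    # Colorful: equally colored vertices may never share a component.
    for u, v in pairs:
        if color[u] == color[v]:
            prob += same(u, v) == 0

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [(u, v) for u, v in edges if pulp.value(same(u, v)) < 0.5]
```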

    A Generalization of Nemhauser and Trotter's Local Optimization Theorem

    The Nemhauser-Trotter local optimization theorem applies to the NP-hard Vertex Cover problem and has applications in approximation as well as parameterized algorithmics. We present a framework that generalizes Nemhauser and Trotter's result to vertex deletion and graph packing problems, introducing novel algorithmic strategies based on purely combinatorial arguments (not referring to linear programming as the Nemhauser-Trotter result originally did). We exhibit our framework using a generalization of Vertex Cover, called Bounded-Degree Deletion, that has promise to become an important tool in the analysis of gene and other biological networks. For some fixed d ≥ 0, Bounded-Degree Deletion asks to delete as few vertices as possible from a graph in order to transform it into a graph with maximum vertex degree at most d. Vertex Cover is the special case of d = 0. Our generalization of the Nemhauser-Trotter theorem implies that Bounded-Degree Deletion has a problem kernel with a linear number of vertices for every constant d. We also outline an application of our extremal combinatorial approach to the problem of packing stars with a bounded number of leaves. Finally, charting the border between (parameterized) tractability and intractability for Bounded-Degree Deletion, we provide a W[2]-hardness result for Bounded-Degree Deletion in case of unbounded d-values.
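
    As a hedged companion to the problem definition (not the paper's kernelization), the simple (d+2)^k search tree for Bounded-Degree Deletion branches on a vertex of degree greater than d: any feasible solution must delete it or one of some d+1 of its neighbours, since otherwise its degree stays above d.

```python
def bdd_branch(adj, d, k):
    # adj: dict vertex -> set of neighbours.  Returns a deletion set of size
    # <= k that brings the maximum degree down to d, or None if none exists.
    high = next((v for v, nbrs in adj.items() if len(nbrs) > d), None)
    if high is None:
        return set()                              # maximum degree is already <= d
    if k == 0:
        return None                               # budget exhausted
    for v in [high] + list(adj[high])[:d + 1]:    # <= d + 2 branches
        reduced = {u: nbrs - {v} for u, nbrs in adj.items() if u != v}
        sub = bdd_branch(reduced, d, k - 1)
        if sub is not None:
            return sub | {v}
    return None
```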